Local Rely-Guarantee Reasoning

Local Rely-Guarantee Reasoning, Xinyu Feng. Accepted for publication at POPL 2009.

Rely-Guarantee reasoning is a well-known method for verification of shared-variable concurrent programs. However, it is difficult for users to define rely/guarantee conditions, which specify threads' behaviors over the whole program state. Recent efforts to combine Separation Logic with Rely-Guarantee reasoning have made it possible to hide thread-local resources, but the shared resources still need to be globally known and specified. This greatly limits the reuse of verified program modules.

In this paper, we propose LRG, a new Rely-Guarantee-based logic that brings local reasoning and information hiding to concurrency verification. Our logic, for the first time, supports a frame rule over rely/guarantee conditions so that specifications of program modules only need to talk about the resources used locally, and the certified modules can be reused in different threads without redoing the proof. Moreover, we introduce a new hiding rule to hide the resources shared by a subset of threads from the rest in the system. The support of information hiding not only improves the modularity of Rely-Guarantee reasoning, but also enables the sharing of dynamically allocated resources, which requires adjustment of rely/guarantee conditions.
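To get a feel for what a frame rule over rely/guarantee conditions looks like, the LRG rule is roughly of the following shape (my paraphrase; the paper's side conditions, which fence the frame r and the extra interference R', G' with an invariant and require stability, are elided):

    \frac{R;\; G \vdash \{p\}\; C\; \{q\}}
         {R * R';\; G * G' \vdash \{p * r\}\; C\; \{q * r\}}

A module is verified against only the resources it actually touches (p, R, G); a client working over a larger state simply stars on the frame r and the environment's extra interference, without reproving C.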

In the beginning there was Hoare logic, which taught us how to reason about sequential, imperative programs. Then, Owicki and Gries extended Hoare logic with some additional rules that enabled reasoning about some concurrent imperative programs. This was good, but there were a lot of "obviously correct" concurrent programs that it couldn't handle. So Owicki-Gries logic begat two children.

The elder child was Jones's introduction of the rely-guarantee method. The intuition here is that if you have two subprograms M1 and M2, and M1 will work in an environment with a working M2, and M2 will work in an environment with a working M1, then when you put the two together you have a working M1 and M2. This is a really powerful reasoning method, but unfortunately it's not terribly modular.
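For the record, the classic parallel composition rule has roughly this shape (one common formulation, with side conditions elided), where R is the rely and G_i the guarantee of thread i:

    \frac{R \lor G_2;\; G_1 \vdash \{p\}\; C_1\; \{q_1\}
          \qquad
          R \lor G_1;\; G_2 \vdash \{p\}\; C_2\; \{q_2\}}
         {R;\; G_1 \lor G_2 \vdash \{p\}\; C_1 \parallel C_2\; \{q_1 \land q_2\}}

Each thread must tolerate the other's guarantee as part of its rely, which is the "M1 works given a working M2, and vice versa" intuition made formal. The modularity problem shows up in R and G themselves: they are relations over the entire shared state, so a module verified once can't be reused in a context that shares different state without redoing the proof.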

The younger child of Owicki-Gries was concurrent separation logic. The intuition behind it is that if you can divide the heap into disjoint (logical) pieces, and only let one process access each chunk at a time, then you can't have any race conditions. This is a very simple principle, and permits modular, compositional reasoning about concurrent programs -- even pointer programs. But there are programs that can't be proven in this style.
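The heap-splitting intuition is captured by O'Hearn's parallel rule, with * the separating conjunction (side conditions about shared variables elided):

    \frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
         {\{P_1 * P_2\}\; C_1 \parallel C_2\; \{Q_1 * Q_2\}}

Each thread is verified against its own piece of the heap and cannot even mention the other's, which is where both the race-freedom and the modularity come from.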

So the obvious thing to want is the ability to combine these two styles of reasoning. Unfortunately, this is hard -- there have been several logics proposed to do this, each of which does a bit better than the last. Feng's is the latest, and the best I've seen so far. (Though concurrency is not really my area.)

An interesting point is that these kinds of reasoning principles, while invented for the concurrent world, are also interesting for reasoning about modular sequential programs. This is because when you create imperative abstractions, it's helpful to be able to give up knowledge about exactly when state changes can happen. So you need the same sorts of techniques to handle this kind of conceptual nondeterminism that you need for the actual nondeterminism of parallel hardware.

Solutions to SICP Exercises

SICP gets many nods when it comes to introductory texts on programming and the study of PLs. I've been slowly working my way through SICP in a number of different PLs, most notably Oz and Alice ML. In the process, I've come across Eli Bendersky's methodical solutions to the SICP Exercises in a series of blog posts. His review of SICP is instructive about the role of the exercises:

A word about exercises in SICP. They are numerous, some of them are hard, but the exercises are the best way to really understand what the book tries to teach. In a manner, they’re as integral a part of the book as the text itself. The exercises are very well prepared and lead the reader through the examples coded by the authors into greater understanding of the topics taught. At times, it feels like the exercises are specifically designed to force you to think about the essence of the topics, and not just grasp them superficially.

Highly recommended reading for anyone who is working their way through SICP. Unlike my own work, which concentrates solely on code, his explanations are quite good. He mostly uses Common Lisp for the solutions, though he resorts to Scheme when it makes for more concise code.

Microsoft Oslo

It seems that Oslo is going to get a lot of press soon, and I don't think we have discussed it yet, nor am I sure I really understand what it's all about...

We have been following Microsoft's DSL and modeling project on and off for a couple of years, and Oslo seems to be another step on this road. The buzz seems to be about visual development, a textual DSL, and development by non-developers (which is probably not the same as end-user programming).

eWeek has a short discussion of The Origins of Microsoft's Oslo Software Modeling Platform. If you have links to more informative resources, or insights to share, please do.

JVM Language Summit report

Tim Bray reports on roughly half of the JVM Language Summit. Among the things he discusses are Clojure, PHP, and JVM/CLR cross-pollination.

MISRA C++:2008

Probably worth noting MISRA C++:2008. There's not much to go on, since you have to pay for the document that outlines the standard.

MISRA C++:2008 was finally released on June 5, 2008, following 3 years of work by a group of willing volunteers. They set out to craft a set of rules for the safe use of C++ in critical systems...

It seems not so long ago that the European insurrection to fork a safer subset of C++ was suppressed. Instead of redefining the language, the effort now goes into enforcing coding standards and best practices: solving things on the engineering side rather than on the programming language specification side.
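Since the actual rule text is paywalled, here is only the flavor of the kind of rule such standards mandate (an illustrative C++ sketch of my own, not quoted from MISRA C++:2008):

    #include <cstdint>

    // Typical of this kind of standard: make conversions explicit and
    // give every switch a default, so intent is visible to reviewers
    // and checkable by static analysis tools.

    std::int32_t scale(std::int16_t x)
    {
        // Frowned-upon style: the widening conversion is implicit.
        //   return x * 100;

        // Preferred style: the conversion is spelled out.
        return static_cast<std::int32_t>(x) * 100;
    }

    const char* describe(std::int32_t level)
    {
        switch (level) {
            case 0:  return "off";
            case 1:  return "on";
            default: return "unknown";  // a final default clause
        }
    }

The point of such rules is precisely the "engineering side" one: they are mechanically checkable by existing tools, with no change to the language itself.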

Intel Ct: C for Throughput Computing

Intel is working on a C++ extension called Ct, intended to simplify multicore programming and to handle data parallelism more thoroughly than previous library efforts such as TBB.

One of the main challenges in scaling multi-core for the future is that of migrating programming tools, build environments, and millions of lines of existing code to new parallel programming models or compilers. To help this transition, Intel researchers are developing “Ct,” or C/C++ for Throughput Computing.

Ct is not intended to be a C++ dialect or replacement but rather a tricky integration of a runtime engine with existing C++ compiler environments.

In principle, Ct works with any standard C++ compiler because it is a standards-compliant C++ library (with a lot of runtime behind the scenes). When one initializes the Ct library, one loads a runtime that includes the compiler, threading runtime, memory manager — essentially all the components one needs for threaded and/or vectorized code generation.
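To make the "library that is secretly a compiler" idea concrete, here is a hypothetical sketch of the general pattern (the names and API below are mine for illustration, not Intel's actual Ct API): operations on a vector type record an expression tree instead of computing eagerly, and forcing a result is the point where a runtime could JIT-compile, vectorize, and thread the whole recorded computation.

    #include <cstddef>
    #include <memory>
    #include <utility>
    #include <vector>

    struct Node;
    using NodePtr = std::shared_ptr<Node>;

    struct Node {
        enum class Op { Leaf, Add, Mul };
        Op op;
        NodePtr lhs, rhs;
        std::vector<double> data;  // payload for Leaf nodes only
    };

    class VecF64 {  // hypothetical Ct-like vector type
    public:
        explicit VecF64(std::vector<double> v)
            : node_(std::make_shared<Node>(
                  Node{Node::Op::Leaf, nullptr, nullptr, std::move(v)})) {}

        friend VecF64 operator+(const VecF64& a, const VecF64& b) {
            return VecF64(std::make_shared<Node>(
                Node{Node::Op::Add, a.node_, b.node_, {}}));
        }
        friend VecF64 operator*(const VecF64& a, const VecF64& b) {
            return VecF64(std::make_shared<Node>(
                Node{Node::Op::Mul, a.node_, b.node_, {}}));
        }

        // Forcing a result is where a real runtime would generate
        // vectorized/threaded code; this sketch just interprets the tree.
        std::vector<double> eval() const { return eval(node_); }

    private:
        explicit VecF64(NodePtr n) : node_(std::move(n)) {}

        static std::vector<double> eval(const NodePtr& n) {
            if (n->op == Node::Op::Leaf) return n->data;
            std::vector<double> l = eval(n->lhs), r = eval(n->rhs);
            for (std::size_t i = 0; i < l.size(); ++i)
                l[i] = (n->op == Node::Op::Add) ? l[i] + r[i] : l[i] * r[i];
            return l;
        }

        NodePtr node_;
    };

    // Usage: given VecF64 a({2,2}), x({3,4}), y({1,1}), the expression
    // (a * x + y).eval() hands the whole a*x+y tree to the runtime at
    // once and yields {7, 9}.

Because the whole expression is visible at once, the runtime gets freedom an ahead-of-time C++ compiler lacks here: it can fuse loops, vectorize, and split the work across cores behind a standards-compliant library interface.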

Simon Peyton Jones Interview

A Simon Peyton Jones interview, part of the series The A-Z of Programming Languages that Naomi Hamilton has been putting together. I'm posting this one to the front page not out of any bias towards functional programming, but because it stands on its own as interesting and insightful from the standpoint of programming language design and evolution.

To supplant established languages, even in the functional programming area, like Haskell or ML or Scheme, you have to build a language that’s not only intriguing and interesting, and enables people to write programs faster, but you also need an implementation that can handle full scale applications and has lots of libraries and can handle profilers and debuggers and graphical analyzers… there’s a whole eco-system that goes with a programming language, and it’s jolly hard work building that up.

AgentSpeak(L): programming with beliefs, desires and intentions

Anand S. Rao (1996). AgentSpeak(L): BDI Agents speak out in a logical computable language.

Rao's AgentSpeak(L) is a Prolog-like, resolution-based language, extended to support agent-based programming in several ways, most importantly:

  1. It extends the language so that clauses can talk not just about satisfaction of predicates, but also about an agent desiring to bring a predicate about, or desiring to find out whether a predicate is true; and it distinguishes between normal goals and the special goals relevant to the BDI (Belief-Desire-Intention) model;
  2. It amends the resolution engine to support what Rao calls reactive concurrency, where agents form plans via a process resembling SLD-resolution, but plans are formed or abandoned on the basis of agent-internal reactions called triggering events (a schematic of this cycle appears below).
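A schematic of that reactive cycle, in hypothetical C++ rather than Rao's notation (beliefs, unification, and plan abandonment are all elided):

    #include <deque>
    #include <functional>
    #include <string>
    #include <vector>

    struct Event { std::string trigger; };      // e.g. the addition of a goal

    struct Plan {
        std::string trigger;                    // triggering event it handles
        std::function<bool()> context;          // guard tested against beliefs
        std::vector<std::string> body;          // steps: subgoals and actions
    };

    using Intention = std::deque<std::string>;  // a partially executed plan

    struct Agent {
        std::deque<Event> events;               // pending triggering events
        std::vector<Plan> plans;                // the plan library
        std::vector<Intention> intentions;      // concurrently active intentions

        void step() {
            // React: turn one pending event into a new intention by
            // finding an applicable plan (trigger matches, context holds).
            if (!events.empty()) {
                Event e = events.front();
                events.pop_front();
                for (const Plan& p : plans) {
                    if (p.trigger == e.trigger && p.context()) {
                        intentions.emplace_back(p.body.begin(), p.body.end());
                        break;
                    }
                }
            }
            // Act: execute a single step of one intention, then return,
            // so that plan formation and execution interleave.
            if (!intentions.empty()) {
                Intention& i = intentions.front();
                /* ...perform the action i.front() here... */
                i.pop_front();
                if (i.empty()) intentions.erase(intentions.begin());
            }
        }
    };

The essential point is that step() does only a little work at a time: new events can arrive between steps, so deliberation and action interleave instead of resolution running to completion, which is what lets plans be formed or abandoned reactively.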

Rao and Georgeff's work on BDI agents and procedural reasoning together constitutes one of the most important contributions to the theory of agents in AI, a topic which hasn't been discussed much here on LtU, but was raised in the Agent Oriented Programming story.

SourceIDE: A Semi-live Cross-development IDE for Cola

SourceIDE: A Semi-live Cross-development IDE for Cola. Scott Wallace, Viewpoints Research Institute, 2008.

This paper describes the first phase of a long-term effort that will culminate in the availability of native, incremental, “live,” interactive development environments for the “Cola” family of languages.

Here's a peek at bootstrapping a new programming environment. The author has written an IDE for the new Cola programming language by adapting Squeak's IDE to operate on external source files written in a language other than Smalltalk. This was done as temporary scaffolding to help develop (amongst other things) a successor IDE written in the target language itself. Oj, what a lot of disposable code to write for a bootstrap!

This is part of Alan Kay and Viewpoints Research's Inventing Fundamental New Computing Technologies project. I've just started hacking on this project myself and I'm really excited, so expect more about it in the coming weeks!

General admin notes

We are experiencing a surge of new members, and that's great! We always value new members.

Let me remind everyone to peruse the policies document (available through the FAQ page). I want to emphasize two policy items in particular. First, we discourage nicknames, and when they are used we encourage members to provide a URL for a home page or related information in their profile. Second, LtU is in general not intended for detailed design discussions. More relevant forums are listed in the policies document.